EU Regulation


Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis

Sovrano, Francesco, Lognoul, Michael, Vilone, Giulia

arXiv.org Artificial Intelligence

Significant investment and development have gone into integrating Artificial Intelligence (AI) in medical and healthcare applications, leading to advanced control systems in medical technology. However, the opacity of AI systems raises concerns about essential characteristics needed in such sensitive applications, like transparency and trustworthiness. Our study addresses these concerns by investigating a process for selecting the most adequate Explainable AI (XAI) methods to comply with the explanation requirements of key EU regulations in the context of smart bioelectronics for medical devices. The adopted methodology starts with categorising smart devices by their control mechanisms (open-loop, closed-loop, and semi-closed-loop systems) and delving into their technology. Then, we analyse these regulations to define their explainability requirements for the various devices and related goals. Simultaneously, we classify XAI methods by their explanatory objectives. This allows for matching legal explainability requirements with XAI explanatory goals and determining the suitable XAI algorithms for achieving them. Our findings provide a nuanced understanding of which XAI algorithms align better with EU regulations for different types of medical devices. We demonstrate this through practical case studies on different neural implants, from chronic disease management to advanced prosthetics. This study fills a crucial gap in aligning XAI applications in bioelectronics with stringent provisions of EU regulations. It provides a practical framework for developers and researchers, ensuring their AI innovations advance healthcare technology and adhere to legal and ethical standards.


The global AI race--it's time to slow down

#artificialintelligence

The world's largest companies cannot be given free rein in their competition to capitalise on artificial intelligence. What is the best way to develop artificial intelligence? This question, long theoretical, is quickly becoming a hands-on concern that will soon demand important strategic choices. We are seeing two completely different approaches play out before our eyes. One is the race among global technology giants, which began with the recent launch of the Microsoft-backed ChatGPT and has already provoked promises of similar systems from Google and the Chinese company Baidu.


Artificial intelligence is the new frontier of ethical tests

#artificialintelligence

WITHOUT a doubt, there is vast potential for advancement and benefit to society arising out of the application of artificial intelligence. Around half of businesses plan to use AI or advanced machine learning in some capacity in the next three years. Transport Secretary Grant Shapps has said self-driving cars could be on our roads as early as next year. This has, predictably, put the debate over artificial intelligence centre stage. What circumstances are cars taught to anticipate?


AI Regulation Threatens Financial Industry

#artificialintelligence

Artificial intelligence plays an important role in the digitalization of many banks, but it could turn into a regulatory minefield in the coming years. Preparations are underway in the European Union for regulations on the use of artificial intelligence (AI). While the process is still in limbo, the thrust of the planned rules provides clues as to what companies need to prepare for. The Swiss AI and analytics consultancy Unit8 has published a white paper outlining the areas of concern arising from such regulation. It recommends that companies prepare for the new rules and implement them pre-emptively, even though Switzerland is not part of the EU.


Fear of AI could pose the biggest cyber risk of all

#artificialintelligence

Quick, think of a scary technology – one with the potential to enslave humankind or destroy the earth. Did you think of AI? Few other technologies generate the fear factor of artificial intelligence. Ever since Alan Turing introduced the idea in 1948, people have wondered what would happen if machines outsmarted their creators and took charge of the planet. Legal protections could avert such a calamity, and the first AI regulations have been published and are awaiting public comment. But some of these draft rules set impossibly high standards.


US-EU agreement on artificial intelligence seen as a swipe at China – but little else for now

#artificialintelligence

The US and EU are talking up the significance of their new pact on artificial intelligence, but a closer inspection indicates the two sides still have precious little in common when it comes to regulating the technology – except a desire to take the moral high ground against China. The long-awaited agreement was reached when the Trade and Technology Council met for the first time on 29 September in Pittsburgh, with Brussels and Washington vowing to make sure AI systems are "innovative and trustworthy" and "respect universal human rights and shared democratic values". The EU and US will "seek to develop a mutual understanding on the principles underlining trustworthy and responsible AI," the agreement says. But exactly what this means in practice remains to be fleshed out. While both sides said they have noted each other's domestic regulatory proposals on AI, there is no mention of coordinating their approaches.


Breaking down the AI regulations

#artificialintelligence

Companies, governments, and other institutions have begun embedding artificial intelligence into their products, services, processes, and decision-making to a great extent. This raises serious questions about how data is used by these systems and what the implications are. The stakes grow even higher when we consider the complex, evolving algorithms that propose health diagnoses, approve loans, or even drive cars autonomously. Now more than ever, it is essential to develop AI tools that are trustworthy and responsible, as AI has, and will continue to have, wide-ranging economic impacts across manufacturing, transportation, health, education, and many other sectors. This can be done through public-sector policies and laws that promote and regulate AI. The topic is a recent one among regulators globally: between 2016 and 2020, a wave of AI regulations and guidelines was published in order to maintain social control over the use of algorithms in our everyday lives.


Cryptoassets and artificial intelligence in EU regulation - FinTech Perspectives

#artificialintelligence

Our FinTech Perspectives series will explore the content, potential, and shortcomings, as well as the areas that require further clarification, of this important field of European legislation on digital transformation in the financial services sector. Within its overarching plan to Shape Europe's Digital Future, the European Commission is determined to make the lead-up to 2030 Europe's Digital Decade. This ambition has, aside from activities in other fields, resulted in an outpouring of legislative initiatives. The great majority of these initiatives aim to get an ever better handle on our digital reality. For all of them, the European legislator needs to reconcile its desire to support technology and business innovation on one side with the necessary protection for individuals and businesses in the EU on the other. Getting this balance right is vital for the success of each of these initiatives.


What the draft European Union AI regulations mean for business

#artificialintelligence

As artificial intelligence (AI) becomes increasingly embedded in the fabric of business and our everyday lives, both corporations and consumer-advocacy groups have lobbied for clearer rules to ensure that it is used fairly. In May, the European Union became the first governmental body in the world to issue a comprehensive response in the form of draft regulations aimed specifically at the development and use of AI. The proposed regulations would apply to any AI system used or providing outputs within the European Union, signaling implications for organizations around the world. Our research shows that many organizations still have a lot of work to do to prepare themselves for this regulation and address the risks associated with AI more broadly. In 2020, only 48 percent of organizations reported that they recognized regulatory-compliance risks, and even fewer (38 percent) reported actively working to address them.


Artificial intelligence: UK and EU take legislative steps - convergence or divergence?

#artificialintelligence

In March this year, the UK government announced an assertive agenda on artificial intelligence (AI) by launching a UK Cyber Security Council and revealing plans to publish a National Artificial Intelligence Strategy (the UK Strategy). The details of the UK Strategy will be released later this year, but at this point we understand that it will focus in particular on promoting economic growth through the widespread use of AI, with an emphasis at the same time on the ethical, safe, and trustworthy development of AI, including through a legislative framework for AI that will promote public trust and a level playing field. Shortly after the UK government's announcement, the EU Commission published a proposed EU-wide AI legislative framework (the EU Regulation), which is part of the Commission's overall "AI package". The EU Regulation is focused on ensuring the safety of individuals and the protection of fundamental human rights, and categorises AI into unacceptable-, high-, or low-risk use cases. It proposes to protect users "where the risks that the AI systems pose are particularly high". The definition and categories of high-risk uses of AI are broad and capture many, if not most, use cases that relate to individuals, including AI use in the context of biometric identification and categorisation of natural persons, management of critical infrastructure, and employment and worker management.